fix: Enhance ResponseAPI usage handling for None values and normalize… #15251
base: main
Conversation
@Maximgitman is attempting to deploy a commit to the CLERKIEAI Team on Vercel. A member of the Team first needs to authorize it.
| "citations": citations, | ||
| "thinking_blocks": thinking_blocks, | ||
| }, | ||
| thinking_blocks=thinking_blocks, |
please revert this - this means users cannot pass thinking blocks back in when using anthropic reasoning.
the thinking blocks contain the message signatures and are a recognized top-level param, which is a guaranteed standard spec across anthropic/bedrock/etc.
Instead - we can opt for a 'strict' mode to drop non-openai fields - especially when using the responses api for the agents sdk. I think that's fine.
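(For illustration only - a minimal sketch of what such a 'strict' mode could look like; the `strict` flag, the field list, and the helper name below are assumptions, not existing LiteLLM options:)

```python
# Hypothetical sketch of a 'strict' mode that drops provider-specific fields
# (like thinking_blocks) before a message is handed to an OpenAI-only consumer,
# e.g. the Agents SDK via the Responses API. All names here are illustrative.
OPENAI_MESSAGE_FIELDS = {"role", "content", "name", "tool_calls", "tool_call_id"}


def strip_non_openai_fields(message: dict, strict: bool = False) -> dict:
    """With strict=False (default) the message is returned untouched, so
    thinking blocks can still be passed back for anthropic reasoning."""
    if not strict:
        return message
    return {k: v for k, v in message.items() if k in OPENAI_MESSAGE_FIELDS}
```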
I would recommend splitting this change from the responses api none usage
Thanks, totally agree.
Will split the PRs and test what we can do with a 'strict' mode there.
| "citations": citations, | ||
| "thinking_blocks": thinking_blocks, | ||
| }, | ||
| thinking_blocks=thinking_blocks, |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I would recommend splitting this change from the responses api none usage
Force-pushed be7f61d to bf5246b
)
if isinstance(usage, dict):
    usage_clean = usage.copy()
    # Ensure numeric fields default to zero rather than None
this code is pretty hard to follow. what exactly are you trying to do?
can you make it simpler please
Hi @krrishdholakia
Ok, I will check today if there is a way to simplify
Thanks tho
Fix: OpenAI Agents SDK compatibility - Handle None usage values
Title
Fix: OpenAI Agents SDK compatibility - Handle None usage values in ResponseAPI
Relevant issues
Fixes a compatibility issue with the OpenAI Agents SDK where ResponseAPI returns None values in usage fields, causing Pydantic validation errors.

Pre-Submission checklist
Please complete all items before asking a LiteLLM maintainer to review your PR
- Added testing in the tests/litellm/ directory (adding at least 1 test is a hard requirement - see details)
- Passes all unit tests via make test-unit

Type
🐛 Bug Fix
Changes
Problem: None Values in Usage Fields
When using the OpenAI Agents SDK with LiteLLM's ResponseAPI, the SDK crashes when usage fields contain None values instead of integers. This happens because:
- Some providers return None for token counts instead of 0
- Nested detail objects (input_tokens_details, output_tokens_details) can be None or contain None values
- The Agents SDK's Pydantic models reject None where an integer is expected

Error:
Stack trace location:
Reproduction Example
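The original reproduction snippet was not captured in this extract. Below is a minimal sketch, assuming a provider response whose usage block carries None values; the Usage model here only mirrors the shape the Agents SDK validates, it is not the SDK's actual class:

```python
from pydantic import BaseModel, ValidationError


class InputTokensDetails(BaseModel):
    cached_tokens: int = 0


class OutputTokensDetails(BaseModel):
    reasoning_tokens: int = 0


class Usage(BaseModel):  # mirrors the shape the Agents SDK expects (assumption)
    input_tokens: int
    output_tokens: int
    total_tokens: int
    input_tokens_details: InputTokensDetails
    output_tokens_details: OutputTokensDetails


# usage block as some providers return it: None instead of 0, missing details
provider_usage = {
    "input_tokens": None,
    "output_tokens": 42,
    "total_tokens": None,
    "input_tokens_details": None,
    "output_tokens_details": {"reasoning_tokens": None},
}

try:
    Usage(**provider_usage)
except ValidationError as err:
    print(err)  # validation errors for every None field
```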
Solution
Normalize usage fields:
- None numeric values default to 0
- None nested detail objects are replaced with zeroed defaults
- input_tokens_details.cached_tokens defaults to 0
- output_tokens_details.reasoning_tokens defaults to 0

This ensures full compatibility with the OpenAI Agents SDK and other OpenAI-compatible clients.
Code Changes
Modified File: litellm/responses/utils.py - Usage field normalization

Before:
After:
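The before/after snippets were not captured in this extract. As an illustration only, here is a minimal sketch of the normalization rules listed under Solution - the helper name and structure are assumptions, not the actual code in litellm/responses/utils.py:

```python
# Illustrative sketch only - not the actual diff from litellm/responses/utils.py.
from typing import Any, Dict, Optional


def normalize_response_api_usage(usage: Optional[Dict[str, Any]]) -> Dict[str, Any]:
    """Return a copy of the usage dict with None values replaced by defaults."""
    usage = dict(usage or {})

    # None numeric values default to 0
    for field in ("input_tokens", "output_tokens", "total_tokens"):
        if usage.get(field) is None:
            usage[field] = 0

    # None nested detail objects are replaced with zeroed defaults
    input_details = dict(usage.get("input_tokens_details") or {})
    input_details["cached_tokens"] = input_details.get("cached_tokens") or 0
    usage["input_tokens_details"] = input_details

    output_details = dict(usage.get("output_tokens_details") or {})
    output_details["reasoning_tokens"] = output_details.get("reasoning_tokens") or 0
    usage["output_tokens_details"] = output_details

    return usage
```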
Tests Added
Test: Enhanced test_response_api_transform_usage_with_none_values in tests/test_litellm/responses/test_responses_utils.py
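A minimal sketch of what the enhanced test could assert, written against the hypothetical normalize_response_api_usage helper from the sketch above rather than LiteLLM's actual transformation utilities:

```python
def test_transform_usage_with_none_values():
    raw_usage = {
        "input_tokens": None,
        "output_tokens": None,
        "total_tokens": None,
        "input_tokens_details": None,
        "output_tokens_details": {"reasoning_tokens": None},
    }

    normalized = normalize_response_api_usage(raw_usage)

    # every numeric field defaults to zero instead of None
    assert normalized["input_tokens"] == 0
    assert normalized["output_tokens"] == 0
    assert normalized["total_tokens"] == 0
    assert normalized["input_tokens_details"]["cached_tokens"] == 0
    assert normalized["output_tokens_details"]["reasoning_tokens"] == 0
```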
Test Results
Related Tests (9 tests, all passing ✅)
$ poetry run pytest tests/test_litellm/responses/test_responses_utils.py -v
tests/test_litellm/responses/test_responses_utils.py::TestResponsesAPIRequestUtils::test_get_optional_params_responses_api PASSED
tests/test_litellm/responses/test_responses_utils.py::TestResponsesAPIRequestUtils::test_get_optional_params_responses_api_unsupported_param PASSED
tests/test_litellm/responses/test_responses_utils.py::TestResponsesAPIRequestUtils::test_get_requested_response_api_optional_param PASSED
tests/test_litellm/responses/test_responses_utils.py::TestResponsesAPIRequestUtils::test_decode_previous_response_id_to_original_previous_response_id PASSED
tests/test_litellm/responses/test_responses_utils.py::TestResponsesAPIRequestUtils::test_update_responses_api_response_id_with_model_id_handles_dict PASSED
tests/test_litellm/responses/test_responses_utils.py::TestResponseAPILoggingUtils::test_is_response_api_usage_true PASSED
tests/test_litellm/responses/test_responses_utils.py::TestResponseAPILoggingUtils::test_is_response_api_usage_false PASSED
tests/test_litellm/responses/test_responses_utils.py::TestResponseAPILoggingUtils::test_transform_response_api_usage_to_chat_usage PASSED
tests/test_litellm/responses/test_responses_utils.py::TestResponseAPILoggingUtils::test_transform_response_api_usage_with_none_values PASSED
====================================================== 9 passed in 0.42s ======================================================

Impact
Before this fix:
- The OpenAI Agents SDK raises a ValidationError when encountering None usage values

After this fix:
- Usage values are always valid integers (never None)